Labels

📌 Retain class distribution for seed 2:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 2:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500

📊 Updated class distribution:
Retain set:
  Class 0: 4750
  Class 1: 4750
  Class 2: 4750
  Class 3: 4750
  Class 4: 4750
  Class 5: 4750
  Class 6: 4750
  Class 7: 4750
  Class 8: 4750
  Class 9: 4750
Forget set:
  Class 0: 250
  Class 1: 250
  Class 2: 250
  Class 3: 250
  Class 4: 250
  Class 5: 250
  Class 6: 250
  Class 7: 250
  Class 8: 250
  Class 9: 250
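
The updated distribution above is a uniform per-class split: 250 of 5,000 samples per class (5%) go to the forget set, leaving 4,750 per class in the retain set. A minimal sketch of how such a seeded, class-stratified split could be produced — `stratified_forget_split` and `forget_ratio` are hypothetical names, not this repository's API, and a torchvision-style dataset exposing `.targets` is assumed:

```python
import numpy as np
from torch.utils.data import Subset

def stratified_forget_split(dataset, forget_ratio=0.05, seed=2):
    """Split `dataset` into retain/forget subsets with identical class
    distributions (e.g. 4750/250 per CIFAR-10 class at forget_ratio=0.05).
    Hypothetical helper, not the repo's actual code."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(dataset.targets)       # CIFAR-10 exposes .targets
    retain_idx, forget_idx = [], []
    for cls in np.unique(targets):
        cls_idx = np.flatnonzero(targets == cls)
        rng.shuffle(cls_idx)                    # seeded, per-class shuffle
        n_forget = int(len(cls_idx) * forget_ratio)
        forget_idx.extend(cls_idx[:n_forget])
        retain_idx.extend(cls_idx[n_forget:])
    return Subset(dataset, retain_idx), Subset(dataset, forget_idx)
```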
⚠️ Warning: Retain train loader may not be shuffled.
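The warning above flags a retain loader that may iterate samples in a fixed order, which hurts SGD. Assuming standard PyTorch, the fix is to pass `shuffle=True` when building the loader; the 256-sample increments in the log below suggest `batch_size=256`, though that is an inference, and `retain_set` refers to the hypothetical split sketch above:

```python
from torch.utils.data import DataLoader

# shuffle=True reorders the retain samples every epoch.
retain_loader = DataLoader(retain_set, batch_size=256, shuffle=True,
                           num_workers=4, pin_memory=True)
```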
Training Epoch: 1 [256/47500]	Loss: 2.4348	LR: 0.000000
Training Epoch: 1 [512/47500]	Loss: 2.4299	LR: 0.000538
Training Epoch: 1 [768/47500]	Loss: 2.4711	LR: 0.001075
Training Epoch: 1 [1024/47500]	Loss: 2.3795	LR: 0.001613
Training Epoch: 1 [1280/47500]	Loss: 2.3018	LR: 0.002151
Training Epoch: 1 [1536/47500]	Loss: 2.2653	LR: 0.002688
Training Epoch: 1 [1792/47500]	Loss: 2.2205	LR: 0.003226
Training Epoch: 1 [2048/47500]	Loss: 2.2316	LR: 0.003763
Training Epoch: 1 [2304/47500]	Loss: 2.2586	LR: 0.004301
Training Epoch: 1 [2560/47500]	Loss: 2.1414	LR: 0.004839
Training Epoch: 1 [2816/47500]	Loss: 2.1691	LR: 0.005376
Training Epoch: 1 [3072/47500]	Loss: 2.1657	LR: 0.005914
Training Epoch: 1 [3328/47500]	Loss: 2.2355	LR: 0.006452
Training Epoch: 1 [3584/47500]	Loss: 2.0091	LR: 0.006989
Training Epoch: 1 [3840/47500]	Loss: 1.8750	LR: 0.007527
Training Epoch: 1 [4096/47500]	Loss: 1.9827	LR: 0.008065
Training Epoch: 1 [4352/47500]	Loss: 1.9934	LR: 0.008602
Training Epoch: 1 [4608/47500]	Loss: 1.8992	LR: 0.009140
Training Epoch: 1 [4864/47500]	Loss: 2.0205	LR: 0.009677
Training Epoch: 1 [5120/47500]	Loss: 1.9442	LR: 0.010215
Training Epoch: 1 [5376/47500]	Loss: 1.8522	LR: 0.010753
Training Epoch: 1 [5632/47500]	Loss: 1.7688	LR: 0.011290
Training Epoch: 1 [5888/47500]	Loss: 1.9399	LR: 0.011828
Training Epoch: 1 [6144/47500]	Loss: 1.7749	LR: 0.012366
Training Epoch: 1 [6400/47500]	Loss: 1.7531	LR: 0.012903
Training Epoch: 1 [6656/47500]	Loss: 1.5969	LR: 0.013441
Training Epoch: 1 [6912/47500]	Loss: 1.7345	LR: 0.013978
Training Epoch: 1 [7168/47500]	Loss: 1.8155	LR: 0.014516
Training Epoch: 1 [7424/47500]	Loss: 1.6406	LR: 0.015054
Training Epoch: 1 [7680/47500]	Loss: 1.7050	LR: 0.015591
Training Epoch: 1 [7936/47500]	Loss: 1.6102	LR: 0.016129
Training Epoch: 1 [8192/47500]	Loss: 1.6459	LR: 0.016667
Training Epoch: 1 [8448/47500]	Loss: 1.6953	LR: 0.017204
Training Epoch: 1 [8704/47500]	Loss: 1.5805	LR: 0.017742
Training Epoch: 1 [8960/47500]	Loss: 1.7571	LR: 0.018280
Training Epoch: 1 [9216/47500]	Loss: 1.6696	LR: 0.018817
Training Epoch: 1 [9472/47500]	Loss: 1.8273	LR: 0.019355
Training Epoch: 1 [9728/47500]	Loss: 1.7225	LR: 0.019892
Training Epoch: 1 [9984/47500]	Loss: 1.6821	LR: 0.020430
Training Epoch: 1 [10240/47500]	Loss: 1.7278	LR: 0.020968
Training Epoch: 1 [10496/47500]	Loss: 1.6065	LR: 0.021505
Training Epoch: 1 [10752/47500]	Loss: 1.6800	LR: 0.022043
Training Epoch: 1 [11008/47500]	Loss: 1.8168	LR: 0.022581
Training Epoch: 1 [11264/47500]	Loss: 1.6145	LR: 0.023118
Training Epoch: 1 [11520/47500]	Loss: 1.7995	LR: 0.023656
Training Epoch: 1 [11776/47500]	Loss: 1.7308	LR: 0.024194
Training Epoch: 1 [12032/47500]	Loss: 1.6259	LR: 0.024731
Training Epoch: 1 [12288/47500]	Loss: 1.6581	LR: 0.025269
Training Epoch: 1 [12544/47500]	Loss: 1.6698	LR: 0.025806
Training Epoch: 1 [12800/47500]	Loss: 1.7112	LR: 0.026344
Training Epoch: 1 [13056/47500]	Loss: 1.5186	LR: 0.026882
Training Epoch: 1 [13312/47500]	Loss: 1.6237	LR: 0.027419
Training Epoch: 1 [13568/47500]	Loss: 1.5221	LR: 0.027957
Training Epoch: 1 [13824/47500]	Loss: 1.6321	LR: 0.028495
Training Epoch: 1 [14080/47500]	Loss: 1.5259	LR: 0.029032
Training Epoch: 1 [14336/47500]	Loss: 1.6453	LR: 0.029570
Training Epoch: 1 [14592/47500]	Loss: 1.5568	LR: 0.030108
Training Epoch: 1 [14848/47500]	Loss: 1.5331	LR: 0.030645
Training Epoch: 1 [15104/47500]	Loss: 1.5936	LR: 0.031183
Training Epoch: 1 [15360/47500]	Loss: 1.6459	LR: 0.031720
Training Epoch: 1 [15616/47500]	Loss: 1.6796	LR: 0.032258
Training Epoch: 1 [15872/47500]	Loss: 1.6155	LR: 0.032796
Training Epoch: 1 [16128/47500]	Loss: 1.7870	LR: 0.033333
Training Epoch: 1 [16384/47500]	Loss: 1.5786	LR: 0.033871
Training Epoch: 1 [16640/47500]	Loss: 1.6431	LR: 0.034409
Training Epoch: 1 [16896/47500]	Loss: 1.6245	LR: 0.034946
Training Epoch: 1 [17152/47500]	Loss: 1.5334	LR: 0.035484
Training Epoch: 1 [17408/47500]	Loss: 1.6991	LR: 0.036022
Training Epoch: 1 [17664/47500]	Loss: 1.4668	LR: 0.036559
Training Epoch: 1 [17920/47500]	Loss: 1.6351	LR: 0.037097
Training Epoch: 1 [18176/47500]	Loss: 1.4743	LR: 0.037634
Training Epoch: 1 [18432/47500]	Loss: 1.4624	LR: 0.038172
Training Epoch: 1 [18688/47500]	Loss: 1.4479	LR: 0.038710
Training Epoch: 1 [18944/47500]	Loss: 1.6396	LR: 0.039247
Training Epoch: 1 [19200/47500]	Loss: 1.6905	LR: 0.039785
Training Epoch: 1 [19456/47500]	Loss: 1.5500	LR: 0.040323
Training Epoch: 1 [19712/47500]	Loss: 1.5875	LR: 0.040860
Training Epoch: 1 [19968/47500]	Loss: 1.5372	LR: 0.041398
Training Epoch: 1 [20224/47500]	Loss: 1.5283	LR: 0.041935
Training Epoch: 1 [20480/47500]	Loss: 1.6574	LR: 0.042473
Training Epoch: 1 [20736/47500]	Loss: 1.4534	LR: 0.043011
Training Epoch: 1 [20992/47500]	Loss: 1.5328	LR: 0.043548
Training Epoch: 1 [21248/47500]	Loss: 1.5970	LR: 0.044086
Training Epoch: 1 [21504/47500]	Loss: 1.4083	LR: 0.044624
Training Epoch: 1 [21760/47500]	Loss: 1.5570	LR: 0.045161
Training Epoch: 1 [22016/47500]	Loss: 1.2888	LR: 0.045699
Training Epoch: 1 [22272/47500]	Loss: 1.4902	LR: 0.046237
Training Epoch: 1 [22528/47500]	Loss: 1.4450	LR: 0.046774
Training Epoch: 1 [22784/47500]	Loss: 1.4239	LR: 0.047312
Training Epoch: 1 [23040/47500]	Loss: 1.4615	LR: 0.047849
Training Epoch: 1 [23296/47500]	Loss: 1.3813	LR: 0.048387
Training Epoch: 1 [23552/47500]	Loss: 1.4156	LR: 0.048925
Training Epoch: 1 [23808/47500]	Loss: 1.3972	LR: 0.049462
Training Epoch: 1 [24064/47500]	Loss: 1.4139	LR: 0.050000
Training Epoch: 1 [24320/47500]	Loss: 1.4207	LR: 0.050538
Training Epoch: 1 [24576/47500]	Loss: 1.3394	LR: 0.051075
Training Epoch: 1 [24832/47500]	Loss: 1.3681	LR: 0.051613
Training Epoch: 1 [25088/47500]	Loss: 1.3081	LR: 0.052151
Training Epoch: 1 [25344/47500]	Loss: 1.4511	LR: 0.052688
Training Epoch: 1 [25600/47500]	Loss: 1.3976	LR: 0.053226
Training Epoch: 1 [25856/47500]	Loss: 1.4030	LR: 0.053763
Training Epoch: 1 [26112/47500]	Loss: 1.2373	LR: 0.054301
Training Epoch: 1 [26368/47500]	Loss: 1.3971	LR: 0.054839
Training Epoch: 1 [26624/47500]	Loss: 1.4202	LR: 0.055376
Training Epoch: 1 [26880/47500]	Loss: 1.5122	LR: 0.055914
Training Epoch: 1 [27136/47500]	Loss: 1.4960	LR: 0.056452
Training Epoch: 1 [27392/47500]	Loss: 1.3270	LR: 0.056989
Training Epoch: 1 [27648/47500]	Loss: 1.6159	LR: 0.057527
Training Epoch: 1 [27904/47500]	Loss: 1.5199	LR: 0.058065
Training Epoch: 1 [28160/47500]	Loss: 1.4748	LR: 0.058602
Training Epoch: 1 [28416/47500]	Loss: 1.2854	LR: 0.059140
Training Epoch: 1 [28672/47500]	Loss: 1.4616	LR: 0.059677
Training Epoch: 1 [28928/47500]	Loss: 1.4493	LR: 0.060215
Training Epoch: 1 [29184/47500]	Loss: 1.3440	LR: 0.060753
Training Epoch: 1 [29440/47500]	Loss: 1.4638	LR: 0.061290
Training Epoch: 1 [29696/47500]	Loss: 1.4809	LR: 0.061828
Training Epoch: 1 [29952/47500]	Loss: 1.3805	LR: 0.062366
Training Epoch: 1 [30208/47500]	Loss: 1.4364	LR: 0.062903
Training Epoch: 1 [30464/47500]	Loss: 1.1613	LR: 0.063441
Training Epoch: 1 [30720/47500]	Loss: 1.2451	LR: 0.063978
Training Epoch: 1 [30976/47500]	Loss: 1.3397	LR: 0.064516
Training Epoch: 1 [31232/47500]	Loss: 1.3004	LR: 0.065054
Training Epoch: 1 [31488/47500]	Loss: 1.2468	LR: 0.065591
Training Epoch: 1 [31744/47500]	Loss: 1.2759	LR: 0.066129
Training Epoch: 1 [32000/47500]	Loss: 1.3042	LR: 0.066667
Training Epoch: 1 [32256/47500]	Loss: 1.4055	LR: 0.067204
Training Epoch: 1 [32512/47500]	Loss: 1.4596	LR: 0.067742
Training Epoch: 1 [32768/47500]	Loss: 1.2095	LR: 0.068280
Training Epoch: 1 [33024/47500]	Loss: 1.2877	LR: 0.068817
Training Epoch: 1 [33280/47500]	Loss: 1.4289	LR: 0.069355
Training Epoch: 1 [33536/47500]	Loss: 1.1444	LR: 0.069892
Training Epoch: 1 [33792/47500]	Loss: 1.3139	LR: 0.070430
Training Epoch: 1 [34048/47500]	Loss: 1.2798	LR: 0.070968
Training Epoch: 1 [34304/47500]	Loss: 1.2784	LR: 0.071505
Training Epoch: 1 [34560/47500]	Loss: 1.2651	LR: 0.072043
Training Epoch: 1 [34816/47500]	Loss: 1.6035	LR: 0.072581
Training Epoch: 1 [35072/47500]	Loss: 1.3668	LR: 0.073118
Training Epoch: 1 [35328/47500]	Loss: 1.3733	LR: 0.073656
Training Epoch: 1 [35584/47500]	Loss: 1.2564	LR: 0.074194
Training Epoch: 1 [35840/47500]	Loss: 1.2486	LR: 0.074731
Training Epoch: 1 [36096/47500]	Loss: 1.3818	LR: 0.075269
Training Epoch: 1 [36352/47500]	Loss: 1.4233	LR: 0.075806
Training Epoch: 1 [36608/47500]	Loss: 1.4637	LR: 0.076344
Training Epoch: 1 [36864/47500]	Loss: 1.4226	LR: 0.076882
Training Epoch: 1 [37120/47500]	Loss: 1.4406	LR: 0.077419
Training Epoch: 1 [37376/47500]	Loss: 1.4926	LR: 0.077957
Training Epoch: 1 [37632/47500]	Loss: 1.4696	LR: 0.078495
Training Epoch: 1 [37888/47500]	Loss: 1.2460	LR: 0.079032
Training Epoch: 1 [38144/47500]	Loss: 1.3815	LR: 0.079570
Training Epoch: 1 [38400/47500]	Loss: 1.3354	LR: 0.080108
Training Epoch: 1 [38656/47500]	Loss: 1.2795	LR: 0.080645
Training Epoch: 1 [38912/47500]	Loss: 1.3417	LR: 0.081183
Training Epoch: 1 [39168/47500]	Loss: 1.4249	LR: 0.081720
Training Epoch: 1 [39424/47500]	Loss: 1.2362	LR: 0.082258
Training Epoch: 1 [39680/47500]	Loss: 1.4586	LR: 0.082796
Training Epoch: 1 [39936/47500]	Loss: 1.4071	LR: 0.083333
Training Epoch: 1 [40192/47500]	Loss: 1.2380	LR: 0.083871
Training Epoch: 1 [40448/47500]	Loss: 1.4750	LR: 0.084409
Training Epoch: 1 [40704/47500]	Loss: 1.6424	LR: 0.084946
Training Epoch: 1 [40960/47500]	Loss: 1.3076	LR: 0.085484
Training Epoch: 1 [41216/47500]	Loss: 1.4842	LR: 0.086022
Training Epoch: 1 [41472/47500]	Loss: 1.3106	LR: 0.086559
Training Epoch: 1 [41728/47500]	Loss: 1.2684	LR: 0.087097
Training Epoch: 1 [41984/47500]	Loss: 1.2200	LR: 0.087634
Training Epoch: 1 [42240/47500]	Loss: 1.7186	LR: 0.088172
Training Epoch: 1 [42496/47500]	Loss: 1.3434	LR: 0.088710
Training Epoch: 1 [42752/47500]	Loss: 1.3012	LR: 0.089247
Training Epoch: 1 [43008/47500]	Loss: 1.3221	LR: 0.089785
Training Epoch: 1 [43264/47500]	Loss: 1.1748	LR: 0.090323
Training Epoch: 1 [43520/47500]	Loss: 1.4382	LR: 0.090860
Training Epoch: 1 [43776/47500]	Loss: 1.2547	LR: 0.091398
Training Epoch: 1 [44032/47500]	Loss: 1.3799	LR: 0.091935
Training Epoch: 1 [44288/47500]	Loss: 1.3723	LR: 0.092473
Training Epoch: 1 [44544/47500]	Loss: 1.1897	LR: 0.093011
Training Epoch: 1 [44800/47500]	Loss: 1.2687	LR: 0.093548
Training Epoch: 1 [45056/47500]	Loss: 1.2684	LR: 0.094086
Training Epoch: 1 [45312/47500]	Loss: 1.3408	LR: 0.094624
Training Epoch: 1 [45568/47500]	Loss: 1.2207	LR: 0.095161
Training Epoch: 1 [45824/47500]	Loss: 1.3191	LR: 0.095699
Training Epoch: 1 [46080/47500]	Loss: 1.2240	LR: 0.096237
Training Epoch: 1 [46336/47500]	Loss: 1.1783	LR: 0.096774
Training Epoch: 1 [46592/47500]	Loss: 1.1954	LR: 0.097312
Training Epoch: 1 [46848/47500]	Loss: 1.1995	LR: 0.097849
Training Epoch: 1 [47104/47500]	Loss: 1.0767	LR: 0.098387
Training Epoch: 1 [47360/47500]	Loss: 1.2365	LR: 0.098925
Training Epoch: 1 [47500/47500]	Loss: 1.1015	LR: 0.099462
Epoch 1 - Average Train Loss: 1.5430, Train Accuracy: 0.4428
Epoch 1 training time consumed: 18.72s
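The LR column above rises linearly from 0 toward ~0.1 across the first epoch (about 0.000538 per 256-sample batch over ~186 batches), consistent with a one-epoch linear warmup on a base LR of 0.1. A minimal sketch of such a schedule — the base LR and one-epoch warmup are inferred from the log, while the `LambdaLR` formulation, momentum, and weight decay are placeholder assumptions; `model` and `retain_loader` come from the earlier sketches:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

warmup_steps = len(retain_loader)        # ~186 batches of 256 for 47500 samples
warmup = LambdaLR(optimizer, lambda s: min(s / warmup_steps, 1.0))

for x, y in retain_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    warmup.step()                        # per-batch linear ramp: 0 -> 0.1
```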
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0063, Accuracy: 0.5065, Time consumed: 0.91s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_16h_20m_59s/ResNet18-Cifar10-seed2-ret50-1-best.pth
Valid (Test) Dl:  10000
Train Dl:  50000
Retain Train Dl:  47500
Forget Train Dl:  2500
Retain Valid Dl:  47500
Forget Valid Dl:  2500
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
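The paired `Set1`/`Set2` lines above (forget-sized 2,500-sample pairs and test-sized 10,000-sample pairs) are consistent with pairwise comparisons of output distributions. In the usual formulation (Chundawat et al.), the ZRF score reported below is one minus the mean Jensen-Shannon divergence between the unlearned model's softmax outputs and those of a randomly initialized model on the forget set; a sketch under that assumption, with `zrf_score` as a hypothetical helper:

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between two batches of softmax outputs."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.add(eps).log() - b.add(eps).log())).sum(dim=1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

@torch.no_grad()
def zrf_score(unlearned_model, random_model, forget_loader, device="cuda"):
    """ZRF = 1 - mean JS divergence on the forget set (hypothetical helper)."""
    divs = []
    for x, _ in forget_loader:
        x = x.to(device)
        p = F.softmax(unlearned_model(x), dim=1)
        q = F.softmax(random_model(x), dim=1)
        divs.append(js_divergence(p, q))
    return 1.0 - torch.cat(divs).mean().item()
```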
Test Accuracy: 50.48828125
Retain Accuracy: 51.97724533081055
Zero Retrain Forgetting (ZRF): 0.8924700617790222
Membership Inference Attack (MIA): 0.4328
Forget vs Retain Membership Inference Attack (MIA): 0.528
Forget vs Test Membership Inference Attack (MIA): 0.509
Test vs Retain Membership Inference Attack (MIA): 0.60175
Train vs Test Membership Inference Attack (MIA): 0.5105
Forget Set Accuracy (Df): 51.293846130371094
Method Execution Time: 909.23 seconds
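The MIA figures above are accuracies of a binary attacker asked to separate two populations (e.g. forget vs. test) from per-example model outputs; values near 0.5 mean the attacker cannot tell the sets apart, i.e. the forget set looks like unseen data. A minimal loss-based variant using logistic regression — the loss feature and classifier choice are assumptions, not necessarily what produced the numbers above, and `mia_accuracy` is a hypothetical helper:

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def per_sample_losses(model, loader, device="cuda"):
    """Per-example cross-entropy losses as a (N, 1) feature matrix."""
    losses = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        losses.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    return torch.cat(losses).numpy().reshape(-1, 1)

def mia_accuracy(model, member_loader, nonmember_loader):
    """Attack accuracy of a logistic-regression attacker on per-sample
    losses; ~0.5 means members are indistinguishable from non-members.
    A held-out split or cross-validation would be more rigorous than
    scoring on the training features, as done here for brevity."""
    l_in = per_sample_losses(model, member_loader)
    l_out = per_sample_losses(model, nonmember_loader)
    X = np.concatenate([l_in, l_out])
    y = np.concatenate([np.ones(len(l_in)), np.zeros(len(l_out))])
    clf = LogisticRegression().fit(X, y)
    return clf.score(X, y)
```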
